Nexus, UCS and Hypervisors
We have recently come up against installation and configuration problems with virtualization on cutting-edge hardware, while building a small virtual datacenter.
The activity was carried out in collaboration with the Clara Consortium, with which we jointly designed and implemented the virtual rack/datacenter. The equipment consisted of UCS C200 M1 servers with dual-port QLogic QLE8152 (FCoE 10GE) CNA cards, a Cisco Nexus 5010 with FCoE + FC module, and an entry-level IBM FC storage array.
The configuration was not intended to have the value of a production system, so it was not meant for business-critical applications (no HA). We are publishing this case to document the troubleshooting approach to take when problems caused by cutting-edge hardware and very recent drivers block other professionals; along the way we involved specialized technical forums and specialized technical staff from various parts of the world.

We moved straight to virtualization, selecting one of the two leading hypervisors currently available (ESXi 4.0 and XenServer 5.6) rather than a private cloud solution, because the latter would have required sacrificing one of the twin servers as a management node. The solution based on VMware's free ESXi 4.0 hypervisor would have required the VMware vSphere Client, and without a vCenter that means a separate connection to each server to be managed; Citrix's free XenServer 5.6 hypervisor, on the other hand, allows all the machines to be administered through a single free Citrix XenCenter management client.

The problems arose, with both hypervisors, from the CNA (Converged Network Adapter) cards: FCoE is a technology that has only recently established itself, and the Cisco UCS servers themselves are very recent hardware. In the ESXi 4.0 case it was necessary to install three drivers to make everything work properly; in the XenServer 5.6 case the drivers for the CNAs were released while our installation and configuration tests were under way, and through an open feedback channel on the brand-new drivers the fantastic Citrix support technicians provided us with the instructions needed to activate and correctly configure all the hardware's capabilities.
Installation with ESXi 4.0:
As you can see from the following open thread on the VMware forum, you need to install:
- QLogic Fibre Channel and Converged Network Adapter CIM Provider – from the QLogic site
- vmware-esx-drivers-net-qlge_400.1.0.0.39-1.0.4.00000.261179 – from VMware (Ethernet driver component for ESXi 4.0)
- vmware-esx-drivers-scsi-qla2xxx_400.831.k1.25vmw-1.0.4.00000.242037 – from VMware (SCSI Fibre Channel driver component for ESX 4.0)
# Adapter CIM Provider
esxupdate --bundle offline-bundle.zip --nodeps --nosigcheck --maintenancemode update
# FCoE FC-SCSI driver
esxupdate --bundle=qlg.831.k1.25vmw-offline_bundle-242037.zip update
# FCoE ETH driver
esxupdate --bundle=qlgc-qlge-100.21-offline_bundle-261179.zip update
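To confirm that the three bundles were actually applied, the list of installed bulletins can be queried; this is just a quick check using the standard esxupdate query action:

# list the bulletins installed on the host – the three QLogic bundles should appear
esxupdate query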
We then tried the new ESXi 4.1 release, obtaining results only for the FC portion of the CNA: for this hypervisor version there are currently no bulletins that install drivers capable of unlocking the Ethernet (10GE) portion of the CNA cards.
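A quick way to see which half of the CNA a given release has actually unlocked is to list the storage and network adapters with the standard ESXi commands; per the behaviour described above, on 4.1 the FC function (qla2xxx) shows up while the 10GE function (qlge) does not:

# list the storage adapters – the FC function of the CNA appears as a vmhba (qla2xxx)
esxcfg-scsidevs -a
# list the network adapters – on ESXi 4.1 no qlge 10GE NICs are shown
esxcfg-nics -l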
Installation with XenServer 5.6:
For this installation we followed the instructions of the drivers released in a very first version on July 6, 2010, then subsequently updated on August 6, 2010, perhaps precisely because of our reports. Citrix then supplemented them with a new article that also includes a firmware update for the CNA boards.
Again, there are different procedures depending on whether the drivers are installed after XenServer or at the same time. In our case we started from a pre-existing installation, done before the release of the drivers, so we had to install them on a running XenServer:
mkdir -p /mnt/tmp
mount /dev/<path to cd-rom> /mnt/tmp
cd /mnt/tmp/
cd qlogic#qla2xxx
./install.sh
cd ../qlogic#qlge/
./install.sh
cd /
umount /mnt/tmp
# Manual PIF creation so that the "new" network interfaces are picked up by the Xen API – repeat for every Eth port
xe pif-introduce mac=<physical-address> device=eth2 host-uuid=<UUID of the host>
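The values required by pif-introduce can be read directly from the host; this is a minimal sketch with standard XenServer commands (eth2 is just the example device used above):

# UUID of the local host
xe host-list
# MAC address of the new interface
ip link show eth2
# verify afterwards that the PIF is now known to the Xen API
xe pif-list device=eth2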
At this point, the configuration work previously done on the IBM storage and on the innovative Nexus network device, dedicated to FCoE technology, allowed us to complete the work brilliantly.
For the configuration of the Nexus we got a helping hand from the excellent video by Jason Nash of Varrow, which covers the configuration of a Nexus 5020 in an ESX environment.
The steps taken to set up the Nexus were (a command sketch follows the list):
- Activation of the required features and licenses
- Activation of the FC module
- Creation of a VLAN and a VSAN and association of the two
- Creation of virtual FC interfaces (VFCs) bound to the respective 10G Ethernet physical interfaces
- Association of the VFCs and the physical FC interfaces with the created VSAN
- Setting trunk mode on the respective 10G Ethernet interfaces
- Bringing up all the interfaces involved
- Creation of aliases for the WWNs of the hosts to be connected (the CNAs) and association of the aliases with the created VSAN
- Assignment of the aliases to a zone
- Assignment of the created zone to a zoneset
- Activation of the zoneset
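For reference, the following is a minimal NX-OS sketch of the steps above, under assumed values (VLAN/VSAN 100, CNA port Ethernet1/1, FC module port fc2/1, placeholder names and WWPNs); it is a sketch to be adapted to the actual environment, not the exact configuration we deployed:

! enable FCoE (requires the appropriate license)
feature fcoe
! VLAN 100 carries the FCoE traffic of VSAN 100
vlan 100
  fcoe vsan 100
vsan database
  vsan 100
! virtual FC interface bound to the converged 10GE port of the host
interface vfc1
  bind interface Ethernet1/1
  no shutdown
! put the VFC and the physical FC port (towards the IBM storage) in the VSAN
vsan database
  vsan 100 interface vfc1
  vsan 100 interface fc2/1
! trunk mode on the 10GE interface so that the FCoE VLAN reaches the CNA
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,100
  no shutdown
! aliases for the host and storage WWPNs (placeholder values), zone and zoneset
fcalias name ESX1-CNA vsan 100
  member pwwn 21:00:00:c0:dd:00:00:01
fcalias name IBM-STORAGE vsan 100
  member pwwn 50:05:07:00:00:00:00:01
zone name Z-ESX1-IBM vsan 100
  member fcalias ESX1-CNA
  member fcalias IBM-STORAGE
zoneset name ZS-LAB vsan 100
  member Z-ESX1-IBM
! activation is done from exec mode
zoneset activate name ZS-LAB vsan 100

Afterwards, show interface vfc1, show flogi database and show zoneset active vsan 100 are useful to confirm that the VFC is up, that the CNAs have logged in, and that the zoning is in effect.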
All the activities were carried out in an excellent climate of collaboration between us and the highly trained staff of the Clara Consortium, which greatly sped up the completion of the work.